The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification

Minh Duc Bui, Katharina Von Der Wense


Abstract
Current natural language processing (NLP) research tends to focus on only one or, less frequently, two dimensions – e.g., performance, interpretability, or efficiency – at a time, which may lead to suboptimal conclusions. Work on adapter modules focuses on improving performance and efficiency, with no investigation of unintended consequences on other aspects such as fairness. To address this gap, we conduct experiments on three text classification datasets by either (1) finetuning all parameters or (2) using adapter modules. Regarding performance and efficiency, we confirm prior findings that the accuracy of adapter-enhanced models is roughly on par with that of fully finetuned models, while training time is substantially reduced. Regarding fairness, we show that adapter modules result in mixed fairness outcomes across sensitive groups. Further investigation reveals that, when the standard finetuned model exhibits limited bias, adapter modules typically do not introduce additional bias. On the other hand, when the finetuned model exhibits increased bias, adapter modules risk amplifying that bias to a significant extent. Our findings highlight the need for case-by-case evaluation rather than a one-size-fits-all judgment.
Anthology ID:
2024.trustnlp-1.4
Volume:
Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Anaelia Ovalle, Kai-Wei Chang, Yang Trista Cao, Ninareh Mehrabi, Jieyu Zhao, Aram Galstyan, Jwala Dhamala, Anoop Kumar, Rahul Gupta
Venues:
TrustNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
40–50
URL:
https://aclanthology.org/2024.trustnlp-1.4
DOI:
10.18653/v1/2024.trustnlp-1.4
Bibkey:
Cite (ACL):
Minh Duc Bui and Katharina Von Der Wense. 2024. The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification. In Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024), pages 40–50, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
The Trade-off between Performance, Efficiency, and Fairness in Adapter Modules for Text Classification (Bui & Von Der Wense, TrustNLP-WS 2024)
PDF:
https://aclanthology.org/2024.trustnlp-1.4.pdf